- For those who would prefer to see this worked out in detail:
- Assume the smaller envelope is uniform on [$0,$M], for some value
- of $M. What is the expected value of always switching? A quarter of
- the time the observed $100 >= $M (i.e. 50% chance the smaller amount is
- in [$M/2,$M] and 50% chance the larger envelope was chosen), in which
- case you certainly hold the larger envelope and the expected switching
- gain is -$50 (a loss). The other three-quarters of the time the expected
- switching gain is (2/3)*$100 + (1/3)*(-$50) = $50, as computed below.
- Thus overall the always-switch policy has an expected (relative to $100)
- gain of (3/4)*$50 + (1/4)*(-$50) = $25.
- However, the unconditional expected gain, in dollars rather than
- relative to the amount observed, is:
- / M
- | g f(g) dg, [ where f(g) = (1/2)*Uniform[0,M)(g) +
- /-M (1/2)*Uniform(-M,0](g). ]
-
- = 0. QED.
-
- OK, so always switching is not the optimal switching strategy. Surely
- there must be some strategy that takes advantage of the fact that we
- looked into the envelope and we know something we did not know before
- we looked.
-
- Well, if we know the maximum value $M that can be in the smaller envelope,
- then the optimal decision criterion is to switch if $100 < $M, and otherwise
- to stick. The reason for the stick case is straightforward: amounts above
- $M can only come from the larger envelope. The reason for the switch case
- is that the pdf of the smaller envelope's amount is twice as high as that
- of the larger envelope's over the range [0,$M), so below $M a switch wins
- twice as often as it loses. That is, the
- expected gain in switching is (2/3)*$100 + (1/3)*(-$50) = $50.
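-
- Both claims are easy to check numerically. Below is a minimal Monte Carlo
- sketch in Python (the uniform prior on [0,$M], M = 100, and the trial
- count are illustrative assumptions, not part of the puzzle):
-
- import random
-
- def envelope_gains(trials=1_000_000, M=100.0):
-     # Smaller amount X is uniform on [0, M]; the other envelope holds 2X.
-     always = 0.0   # gain from switching unconditionally
-     thresh = 0.0   # gain from switching only when the amount seen is < M
-     for _ in range(trials):
-         x = random.uniform(0.0, M)
-         seen, other = random.sample([x, 2 * x], 2)
-         always += other - seen
-         if seen < M:
-             thresh += other - seen
-     print("always switch:   mean gain %+.2f" % (always / trials))
-     print("switch iff < M:  mean gain %+.2f" % (thresh / trials))
-
- envelope_gains()   # ~ +0.00 and ~ +18.75 (= 3M/16) respectively
-
- The unconditional gain of the threshold rule is 3M/16 per play; the $50
- figure above is its expected gain conditioned on having seen $100.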
-
- What if we do not know the maximum value $M? You can exploit
- the "test value" technique to improve your chances. The trick here is
- to pick a test value T. If the amount in the envelope is less than the
- test value, switch; if it is more, do not. This works in that if T happens
- to fall between the two amounts actually in the envelopes, you are certain
- to make the correct decision, and otherwise you do no worse than deciding
- without T. Therefore, assuming the unknown pdf is uniform on [0,M], you
- are slightly better off with this technique.
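-
- A sketch of the same kind as above (again assuming, purely for
- illustration, a uniform prior with M = 100) shows how the choice of T
- affects the expected gain:
-
- import random
-
- def test_value_gain(T, trials=400_000, M=100.0):
-     # Switch iff the amount seen is below the test value T.
-     total = 0.0
-     for _ in range(trials):
-         x = random.uniform(0.0, M)
-         seen, other = random.sample([x, 2 * x], 2)
-         if seen < T:
-             total += other - seen
-     return total / trials
-
- for T in (25, 50, 100, 150, 200, 400):
-     print("T = %3d   mean gain = %+6.2f" % (T, test_value_gain(T)))
-
- The gain is positive for every T in (0,2M) and peaks at T = M; by T >= 2M
- the rule has degenerated into always switching, with zero expected gain.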
-
- Of course, the pdf may not even be uniform, so the "test value" technique
- may not offer much of an advantage. If you are allowed to play the game
- repeatedly, you can estimate the pdf, but that is another story...
-
- ==> decision/exchange.p <==
- At one time, the Mexican and American dollars were devalued by 10 cents on each
- side of the border (i.e. a Mexican dollar was 90 cents in the US, and a US
- dollar was worth 90 cents in Mexico). A man walks into a bar on the American
- side of the border, orders 10 cents worth of beer, pays with a US dollar,
- and receives a Mexican dollar in change. He then walks across the border
- to Mexico, orders 10 cents worth of beer, pays with the Mexican dollar,
- and receives a US dollar in change. He continues this throughout the day,
- and ends up dead drunk with the original dollar in his pocket.
-
- Who pays for the drinks?
-
- ==> decision/exchange.s <==
- The man paid for all the drinks. But, you say, he ended up with the same
- amount of money that he started with! However, as he transported Mexican
- dollars into Mexico and US dollars into the US, he performed "economic work"
- by moving the currency to a location where it was in greater demand (and thus
- valued higher). The earnings from this work were spent on the drinks.
-
- Note that he can only continue to do this until the Mexican bar runs out
- of US dollars, or the US bar runs out of Mexican dollars, i.e., until
- he runs out of "work" to do.
-
- ==> decision/newcomb.p <==
- Newcomb's Problem
-
- A being has put one thousand dollars in box A and either zero or one million
- dollars in box B, and presents you with two choices:
- (1) Open box B only.
- (2) Open both box A and box B.
- The being put money in box B only if it predicted you would choose option (1).
- The being put nothing in box B if it predicted you would do anything other
- than choose option (1) (including choosing option (2), flipping a coin, etc.).
-
- Assuming that you have never known the being to be wrong in predicting your
- actions, which option should you choose to maximize the amount of money you
- get?
-
-
- ==> decision/newcomb.s <==
- This is "Newcomb's Paradox".
-
- You are presented with two boxes: one certainly contains $1000 and the
- other might contain $1 million. You can either take one box or both.
- You cannot change what is in the boxes. Therefore, to maximize your
- gain you should take both boxes.
-
- However, it might be argued that you can change the probability that
- the $1 million is there. Since there is no way to change whether the
- million is in the box or not, what does it mean that you can change
- the probability that the million is in the box? It means that your
- choice is correlated with the state of the box.
-
- Events which proceed from a common cause are correlated. Your mental
- states lead to your choice and, very probably, to the state of the box.
- Therefore your choice and the state of the box are highly correlated.
- In this sense, your choice changes the "probability" that the money is
- in the box. However, since your choice cannot change the state of the
- box, this correlation is irrelevant.
-
- The following argument might be made: your expected gain if you take
- both boxes is (nearly) $1000, whereas your expected gain if you take
- one box is (nearly) $1 million, therefore you should take one box.
- However, this argument is fallacious. In order to compute the
- expected gain, one would use the formulas:
-
- E(take one) = $0 * P(predict take both | take one) +
- $1,000,000 * P(predict take one | take one)
- E(take both) = $1,000 * P(predict take both | take both) +
- $1,001,000 * P(predict take one | take both)
-
- While you are given that P(do X | predict X) is high, it is not given
- that P(predict X | do X) is high. Indeed, specifying that P(predict X
- | do X) is high would be equivalent to specifying that the being could
- use magic (or reverse causality) to fill the boxes. Therefore, the
- expected gain from either action cannot be determined from the
- information given.
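-
- To see how strongly the answer depends on the unspecified quantity, here
- is a small Python sketch that evaluates the two formulas above under the
- additional assumption -- not given in the problem -- that
- P(predict X | do X) has a single value q for both actions:
-
- def newcomb(q):
-     # q = P(predict X | do X), assumed equal for both actions.
-     e_one = 1_000_000 * q
-     e_both = 1_000 * q + 1_001_000 * (1 - q)
-     return e_one, e_both
-
- for q in (0.5, 0.51, 0.9, 1.0):
-     e1, e2 = newcomb(q)
-     print("q = %4.2f   E(take one) = %9.0f   E(take both) = %9.0f"
-           % (q, e1, e2))
-
- Under this assumption one-boxing pulls ahead as soon as q exceeds
- 1001/2000, i.e. just over a coin flip; but as argued above, nothing in
- the problem statement fixes q.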
-
-
- ==> decision/prisoners.p <==
- Three prisoners on death row are told that one of them has been chosen
- at random for execution the next day, but the other two are to be
- freed. One privately begs the warden to at least tell him the name of
- one other prisoner who will be freed. The warden relents: 'Susie will
- go free.' Horrified, the first prisoner says that because he is now
- one of only two remaining prisoners at risk, his chances of execution
- have risen from one-third to one-half! Should the warden have kept his
- mouth shut?
-
- ==> decision/prisoners.s <==
- Call the prisoner who asked the question A and the other two B and C;
- suppose the warden's "Susie" is prisoner B. Each prisoner had an equal
- chance of being the one chosen to be executed. So we have three cases:
-
- Prisoner executed: A B C
- Probability of this case: 1/3 1/3 1/3
-
- Now, if A is to be executed, the warden will randomly choose either B or C,
- and tell A that name. When B or C is the one to be executed, there is only
- one prisoner other than A who will not be executed, and the warden will always
- give that name. So now we have:
-
- Prisoner executed: A A B C
- Name given to A: B C C B
- Probability: 1/6 1/6 1/3 1/3
-
- We can calculate all this without knowing the warden's answer.
- When he tells us B will not be executed, we eliminate the middle two
- choices above. Now, among the two remaining cases, C is twice
- as likely as A to be the one executed. Thus, the probability that
- A will be executed is still 1/3, and C's chances are 2/3.
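-
- A quick simulation confirms the table; a minimal sketch in Python, with
- the warden's coin toss made explicit:
-
- import random
-
- N = 1_000_000
- says_b = a_dies = 0
- for _ in range(N):
-     executed = random.choice("ABC")
-     if executed == "A":
-         named = random.choice("BC")     # warden picks B or C at random
-     else:
-         named = "C" if executed == "B" else "B"
-     if named == "B":
-         says_b += 1
-         a_dies += (executed == "A")
- print("P(A executed | warden says B goes free) = %.3f" % (a_dies / says_b))
- # prints ~0.333; the remaining 2/3 belongs to C, as computed above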
-
- ==> decision/red.p <==
- I show you a shuffled deck of standard playing cards, one card at a
- time. At any point before I run out of cards, you must say "RED!".
- If the next card I show is red (i.e. diamonds or hearts), you win. We
- assume that I, the "dealer," have no control over the order of the
- cards.
-
- The question is, what's the best strategy, and what is your
- probability of winning?
-
- ==> decision/red.s <==
- If a deck has n cards, r red and b black, the best strategy wins
- with a probability of r/n. Thus, you can say "red" on the first card,
- the last card, or any other card you wish.
- Proof by induction on n. The statement is clearly true for one-card decks.
- Suppose it is true for n-card decks, and add a red card.
- I will even allow a nondeterministic strategy, meaning you say "red"
- on the first card with probability p. With probability 1-p,
- you watch the first card go by, and then apply the "optimal" strategy
- to the remaining n-card deck, since you now know its composition.
- The odds of winning are therefore: p * (r+1)/(n+1) +
- (1-p) * ((r+1)/(n+1) * r/n + b/(n+1) * (r+1)/n).
- After some algebra, this becomes (r+1)/(n+1) as expected.
- Adding a black card yields: p * r/(n+1) +
- (1-p) * (r/(n+1) * (r-1)/n + (b+1)/(n+1) * r/n).
- This becomes r/(n+1) as expected.
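-
- The result is counterintuitive enough to be worth a simulation. Here is a
- Python sketch comparing "call on the first card" with a plausible-looking
- adaptive rule (call as soon as red cards form a majority of the
- remainder); both rules are forced to call on the last card:
-
- import random
-
- def wins(deck, strategy):
-     seen_r = seen_b = 0
-     for i, card in enumerate(deck):
-         left = len(deck) - i
-         if left == 1 or strategy(seen_r, seen_b, left):
-             return card == 'R'      # you said "RED!"; `card` comes next
-         seen_r += card == 'R'
-         seen_b += card == 'B'
-
- first  = lambda r, b, left: True                    # call immediately
- greedy = lambda r, b, left: (26 - r) / left > 0.5   # reds left are majority
-
- deck = ['R'] * 26 + ['B'] * 26
- for name, strat in (("first card", first), ("greedy", greedy)):
-     hits, runs = 0, 200_000
-     for _ in range(runs):
-         random.shuffle(deck)
-         hits += wins(deck, strat)
-     print("%-10s  win rate = %.4f" % (name, hits / runs))
- # both print ~0.5000 = r/n, as the induction predicts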
-
- ==> decision/rotating.table.p <==
- Four glasses are placed upside down in the four corners of a square
- rotating table. You wish to turn them all in the same direction,
- either all up or all down. You may do so by grasping any two glasses
- and, optionally, turning either over. There are two catches: you are
- blindfolded and the table is spun after each time you touch the
- glasses. How do you do it?
- ==> decision/rotating.table.s <==
- (Assume you are told, say by a bell, the moment all four glasses point
- the same way; the table is spun before every step.)
-
- 1. Turn two adjacent glasses up.
- 2. Turn two diagonal glasses up. (At least three glasses are now up.)
- 3. Pull out two diagonal glasses. If one is down, turn it up and you're done.
-    If not, turn one down and replace. (Now exactly two adjacent glasses
-    are down.)
- 4. Take two adjacent glasses. Invert them both. (You are done unless you
-    got one up and one down glass, which leaves the two down glasses
-    diagonal.)
- 5. Take two diagonal glasses. Invert them both.
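-
- Here is a Python sketch that spins the table at random before every step
- and checks that the procedure never needs more than its five touches (the
- all-same test plays the role of the bell):
-
- import random
-
- def solved(g):
-     return all(g) or not any(g)
-
- def procedure(g):
-     # g: four booleans (True = up) in clockwise order; returns steps used.
-     steps = [((0, 1), "up"), ((0, 2), "up"), ((0, 2), "fix"),
-              ((0, 1), "flip"), ((0, 2), "flip")]
-     for n, ((i, j), act) in enumerate(steps):
-         if solved(g):
-             return n
-         k = random.randrange(4)            # the table is spun
-         g[:] = g[k:] + g[:k]
-         if act == "up":
-             g[i] = g[j] = True
-         elif act == "flip":
-             g[i], g[j] = not g[i], not g[j]
-         elif not g[i]:                     # "fix": turn a down glass up,
-             g[i] = True
-         elif not g[j]:
-             g[j] = True
-         else:                              # or turn one down if both are up
-             g[i] = False
-     assert solved(g)
-     return 5
-
- worst = 0
- for _ in range(100_000):
-     g = [random.random() < 0.5 for _ in range(4)]
-     if not solved(g):                      # else the bell has already rung
-         worst = max(worst, procedure(g))
- print("always solved; worst case =", worst, "touches")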
-
- References
- "Probing the Rotating Table"
- W. T. Laaser and L. Ramshaw
- _The Mathematical Gardner_,
- Wadsworth International, Belmont CA 1981.
-
- ... we will see that such a procedure exists if and
- only if the parameters k and n satisfy the inequality
- k >= (1-1/p)n, where p is the largest prime factor
- of n.
-
- The paper mentions (without discussing) two other generalizations:
- more than two orientations of the glasses (Graham and Diaconis)
- and more symmetries in the table, e.g. those of a cube (Kim).
-
- ==> decision/stpetersburg.p <==
- What should you be willing to pay to play a game in which the payoff is
- calculated as follows: a coin is flipped until it comes up heads, and if
- the first head appears on the nth toss, the payoff is 2^n dollars?
-
- ==> decision/stpetersburg.s <==
- Classical decision theory says that you should be willing to pay any
- amount up to the expected value of the wager. Let's calculate the
- expected value: The probability of winning at step n is 2^-n, and the
- payoff at step n is 2^n, so the sum of the products of the
- probabilities and the payoffs is:
-
- E = sum over n (2^-n * 2^n) = sum over n (1) = infinity
-
- So you should be willing to pay any amount to play this game. This is
- called the "St. Petersburg Paradox."
-
- The classical solution to this problem was given by Bernoulli. He
- noted that people's desire for money is not linear in the amount of
- money involved. In other words, people do not desire $2 million twice
- as much as they desire $1 million. Suppose, for example, that people's
- desire for money is a logarithmic function of the amount of money.
- Then the expected VALUE of the game is:
-
- E = sum over n (2^-n * C * log(2^n)) = sum over n (2^-n * C' * n) = C''
-
- Here the C's are constants that depend upon the risk aversion of the
- player, but at least the expected value is finite. However, it turns
- out that these constants are usually much higher than people are really
- willing to pay to play, and in fact it can be shown that any
- non-bounded utility function (map from amount of money to value of
- money) is prey to a generalization of the St. Petersburg paradox. So
- the classical solution of Bernoulli is only part of the story.
-
- The rest of the story lies in the observation that bankrolls are always
- finite, and this dramatically reduces the amount you are willing to bet
- in the St. Petersburg game.
-
- To figure out what would be a fair value to charge for playing the game
- we must know the bank's resources. Assume that the bank has 1 million
- dollars (about 2^20, so it can cover at most 20 doublings). I cannot
- possibly win more than $1 million whether I toss 20 tails in a row or 2000.
-
- Therefore my expected amount of winning is
-
- E = sum n up to 20 (2^-n * 2^n) = sum n up to 20 (1) = $20
-
- and my expected value of winning is
-
- E = sum n up to 20 (2^-n * C * log(2^n)) = some small number
-
- This is much more in keeping with what people would really pay to
- play the game.
-
- Incidentally, T.C. Fry suggested this change to the problem in 1928
- (see W.W.R. Ball, Mathematical Recreations and Essays, N.Y.: Macmillan,
- 1960, pp. 44-45).
-
- The problem remains interesting when modified in this way,
- for the following reason. For a particular value of the bank's
- resources, let
-
- e denote the expected value of the player's winnings; and let
- p denote the probability that the player profits from the game, assuming
- the price of getting into the game is 0.8e (20% discount).
-
- Note that the expected value of the player's profit is 0.2e. Now
- let's vary the bank's resources and observe how e and p change. It
- will be seen that as e (and hence the expected value of the profit)
- increases, p diminishes. The more the game is to the player's
- advantage in terms of expected value of profit, the less likely it is
- that the player will come away with any profit at all. This
- is mildly counterintuitive.
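-
- A sketch of the computation in Python (the bank is assumed to hold
- exactly 2^m dollars, and e below includes the capped payoff for runs
- longer than m tosses, so e = m+1 rather than the rounder figure of $20
- used above):
-
- from math import floor, log2
-
- def petersburg(m):
-     e = m + 1.0                    # sum_{n<=m} 1, plus 2^-m * 2^m for the tail
-     fee = 0.8 * e                  # entry price at the 20% discount
-     n_star = floor(log2(fee)) + 1  # least n whose payoff 2^n beats the fee
-     p = 2.0 ** (1 - n_star)        # P(first head on toss n_star or later)
-     return e, fee, p
-
- for m in (10, 20, 40, 80):
-     e, fee, p = petersburg(m)
-     print("bank = 2^%-3d e = %5.1f  fee = %6.2f  P(profit) = %.5f"
-           % (m, e, fee, p))
-
- As the bank (and with it e) grows, P(profit) falls: 0.125 at 2^10,
- 0.0625 at 2^20, 0.03125 at 2^40, and so on.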
-
- ==> decision/switch.p <==
- Switch? (The Monty Hall Problem)
-
- Two black marbles and a red marble are in a bag. You choose one marble from the
- bag without looking at it. Another person chooses a marble from the bag and it
- is black. You are given a chance to keep the marble you have or switch it with
- the one in the bag. If you want to end up with the red marble, is there an
- advantage to switching? What if the other person looked at the marbles remaining
- in the bag and purposefully selected a black one?
-
- ==> decision/switch.s <==
- Generalize the problem from three marbles to n marbles.
-
- If there are n marbles, your odds of having selected the red one are 1/n.
- After the other person selects a black one at random, your odds go up to
- 1/(n-1). There are n-2 marbles left in the bag, so the odds that switching
- gets you the red one are 1/(n-2) times the odds that it is still in the
- bag, (n-2)/(n-1) -- that is, 1/(n-1), the same as the odds that you
- already hold it. Therefore there is no advantage to switching.
-
- If the person looked into the bag and selected a black one on purpose, then
- your odds of having selected the red one remain 1/n (you have learned
- nothing), so the odds of getting the red one by switching are 1/(n-2) times
- (n-1)/n, or (n-1)/n(n-2). This is (n-1)/(n-2) times better than the odds
- without switching, so you should switch.
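-
- Both variants are easy to simulate; a Python sketch (the values of n and
- the trial count are arbitrary):
-
- import random
-
- def trial(n, deliberate):
-     bag = [True] + [False] * (n - 1)       # True = the red marble
-     random.shuffle(bag)
-     mine = bag.pop()
-     if deliberate:
-         bag.remove(False)                  # they look and pull out a black
-     else:
-         drawn = bag.pop(random.randrange(len(bag)))
-         if drawn:
-             return None                    # they drew the red: discard trial
-     return mine, random.choice(bag)        # outcomes of keeping / switching
-
- for n in (3, 10):
-     for deliberate in (False, True):
-         keep = switch = runs = 0
-         while runs < 200_000:
-             t = trial(n, deliberate)
-             if t is None:
-                 continue
-             keep += t[0]; switch += t[1]; runs += 1
-         print("n=%2d %-10s P(keep)=%.3f  P(switch)=%.3f"
-               % (n, "deliberate" if deliberate else "random",
-                  keep / runs, switch / runs))
-
- For the random case the two probabilities agree (1/(n-1) each); for the
- deliberate case switching wins (n-1)/(n-2) times as often.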
-
- This is a clarified version of the Monty Hall "paradox":
-
- You are a participant on "Let's Make a Deal." Monty Hall shows you
- three closed doors. He tells you that two of the closed doors have a
- goat behind them and that one of the doors has a new car behind it.
- You pick one door, but before you open it, Monty opens one of the two
- remaining doors and shows that it hides a goat. He then offers you a
- chance to switch doors with the remaining closed door. Is it to your
- advantage to do so?
-
- The original Monty Hall problem (and solution) appears to be due to
- Steve Selvin, and appears in American Statistician, Feb 1975, V. 29,
- No. 1, p. 67 under the title ``A Problem in Probability.'' It should
- be of no surprise to readers of this group that he received several
- letters contesting the accuracy of his solution, so he responded two
- issues later (American Statistician, Aug 1975, V. 29, No. 3, p. 134).
- I extract a few words of interest, including a response from Monty
- Hall himself:
-
- ... The basis to my solution is that Monty Hall knows which box
- contains the prize and when he can open either of two boxes without
- exposing the prize, he chooses between them at random ...
-
- Benjamin King pointed out the critical assumptions about Monty
- Hall's behavior that are necessary to solve the problem, and
- emphasized that ``the prior distribution is not the only part of
- the probabilistic side of a decision problem that is subjective.''
-
- Monty Hall wrote and expressed that he was not ``a student of
- statistics problems'' but ``the big hole in your argument is that
- once the first box is seen to be empty, the contestant cannot
- exchange his box.'' He continues to say, ``Oh, and incidentally,
- after one [box] is seen to be empty, his chances are not 50/50 but
- remain what they were in the first place, one out of three. It
- just seems to the contestant that one box having been eliminated,
- he stands a better chance. Not so.'' I could not have said it
- better myself.
-
- The basic idea is that the Monty Hall problem is confusing for two
- reasons: first, there are hidden assumptions about Monty's motivation
- that cloud the issue in some people's minds; and second, novice probability
- students do not see that the opening of the door gave them any new
- information.
-
- Monty can have one of three basic motives:
- 1. He randomly opens doors.
- 2. He always opens the door he knows contains nothing.
- 3. He only opens a door when the contestant has picked the grand prize.
-
- These result in very different strategies:
- 1. No improvement when switching.
- 2. Double your odds by switching.
- 3. Don't switch!
-
- Most people, myself included, think that (2) is the intended
- interpretation of Monty's motive.
-
- A good way to see that Monty is giving you information by opening doors is to
- increase the number of doors from three to 100. If there are 100 doors,
- and Monty shows that 98 of them are empty, isn't it pretty clear that
- the chance the prize is behind the remaining door is 99/100?
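-
- That intuition can also be checked directly; a short Python sketch of the
- 100-door game with an informed Monty:
-
- import random
-
- def hundred_doors(trials=100_000, n=100):
-     switch_wins = 0
-     for _ in range(trials):
-         car, pick = random.randrange(n), random.randrange(n)
-         # Monty knowingly opens n-2 empty doors, leaving one other shut:
-         if car != pick:
-             closed = car                   # he must leave the car shut
-         else:
-             closed = random.choice([d for d in range(n) if d != pick])
-         switch_wins += (closed == car)
-     print("P(win by switching) = %.3f" % (switch_wins / trials))  # ~0.99
-
- hundred_doors()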
-
- Reference (too numerous to mention, but this one should do):
- Leonard Gillman
- "The Car and the Goats"
- The American Mathematical Monthly, 99:1 (Jan 1992), pp. 3-7.
-
- ==> decision/truel.p <==
- A, B, and C are to fight a three-cornered pistol duel. All know that
- A's chance of hitting his target is 0.3, C's is 0.5, and B never misses.
- They are to fire at their choice of target in succession in the order
- A, B, C, cyclically (but a man who has been hit loses all further turns and
- is no longer shot at) until only one man is left. What should A's strategy be?
-
- ==> decision/truel.s <==
- This is problem 20 in Mosteller _Fifty Challenging Problems in Probability_
- and it also appears (with an almost identical solution) on page 82 in
- Larsen & Marx _An Introduction to Probability and Its Applications_.
-
- Here's Mosteller's solution:
-
- A is naturally not feeling cheery about this enterprise. Having the
- first shot he sees that, if he hits C, B will then surely hit him, and
- so he is not going to shoot at C. If he shoots at B and misses him,
- then B clearly {I disagree; this is not at all clear!} shoots the more
- dangerous C first, and A gets one shot at B with probability 0.3 of
- succeeding. If he misses this time, the less said the better. On the
- other hand, suppose A hits B. Then C and A shoot alternately until one
- hits. A's chance of winning is (.5)(.3) + (.5)^2(.7)(.3) +
- (.5)^3(.7)^2(.3) + ... . Each term corresponds to a sequence of misses
- by both C and A ending with a final hit by A. Summing the geometric
- series we get ... 3/13 < 3/10. Thus hitting B and finishing off with
- C has less probability of winning for A than just missing the first shot.
- So A fires his first shot into the ground and then tries to hit B with
- his next shot. C is out of luck.
-
- As much as I respect Mosteller, I have some serious problems with this
- solution. If we allow the option of firing into the ground, then if
- all fire into the ground with every shot, each will survive with
- probability 1. Now, the argument could be made that X would prefer a
- strategy that both allows X to survive with probability 1 *and* gives
- at least one of X's foes a probability of survival of less than 1.
- However, if X pulls the trigger and actually hits someone, what would
- the remaining person, say Y, do? If P(X hits)=1, clearly Y must try to
- hit X, since X firing at Y with intent to hit dominates any other
- strategy for X. If P(X hits)<1 and X fires at Y with intent to hit,
- then P(Y survives)<1 (since X could have hit Y). Thus, Y must ensure
- that X cannot follow this strategy by shooting back at X (thus ensuring
- that P(X survives)<1). Therefore, I would conclude that the ideal
- strategy for all three players, assuming that they are rational and
- value survival above killing their enemies, would be to keep firing
- into the ground. If they don't value survival above killing their
- enemies (which is the only a priori assumption that I feel can be
- safely made in the absence of more information), then the problem
- can't be solved unless the function each player is trying to maximize
- is explicitly given.
- --
- -- clong@remus.rutgers.edu (Chris Long)
-
- OK - I'll have a go at this.
-
- How about the payoff function being 1 if you win the "duel" (i.e. if at some
- point you are still standing and both the others have been shot) and 0
- otherwise? This should ensure that an infinite sequence of deliberate misses
- is not to anyone's advantage. Furthermore, I don't think simple survival
- makes a realistic payoff function, since people with such a payoff function
- would not get involved in the fight in the first place!
-
- [ I.e. I am presupposing a form of irrationality on the part of the
- fighters: they're only interested in survival if they win the duel. Come
- to think of it, this may be quite rational - spending the rest of my life
- firing a gun into the ground would be a very unattractive proposition to
- me :-)
- ]
-
- Now, denote each position in the game by the list of people left standing,
- in the order in which they get their turns (so the initial position is
- (A,B,C), and the position after A misses the first shot is (B,C,A)). We need to
- know the value of each possible position for each person.
-
- By definition:
-
- valA(A) = 1 valB(A) = 0 valC(A) = 0
- valA(B) = 0 valB(B) = 1 valC(B) = 0
- valA(C) = 0 valB(C) = 0 valC(C) = 1
-
- Consider the two player position (X,Y). An infinite sequence of misses has
- value zero to both players, and each player can ensure a positive payoff by
- trying to shoot the other player. So both players deliberately missing is a
- sub-optimal result for both players. The question is then whether both
- players should try to shoot the other first, or whether one should let the
- other take the first shot. Since having the first shot is always an
- advantage, given that some real shots are going to be fired, both players
- should try to shoot the other first. It is then easy to establish that:
-
- valA(A,B) = 3/10 valB(A,B) = 7/10 valC(A,B) = 0
- valA(B,A) = 0 valB(B,A) = 1 valC(B,A) = 0
- valA(B,C) = 0 valB(B,C) = 1 valC(B,C) = 0
- valA(C,B) = 0 valB(C,B) = 5/10 valC(C,B) = 5/10
- valA(C,A) = 3/13 valB(C,A) = 0 valC(C,A) = 10/13
- valA(A,C) = 6/13 valB(A,C) = 0 valC(A,C) = 7/13
-
- Now for the three player positions (A,B,C), (B,C,A) and (C,A,B). Again, the
- fact that an infinite sequence of misses is sub-optimal for all three
- players means that at least one player is going to decide to fire. However,
- it is less clear than in the 2 player case that any particular player is
- going to fire. In the 2 player case, each player knew that *if* it was
- sub-optimal for him to fire, then it was optimal for the other player to
- fire *at him* and that he would be at a disadvantage in the ensuing duel
- because of not having got the first shot. This is not necessarily true in
- the 3 player case.
-
- Consider the payoff to A in the position (A,B,C). If he shoots at B, his
- expected payoff is:
-
- 0.3*valA(C,A) + 0.7*valA(B,C,A) = 9/130 + 0.7*valA(B,C,A)
-
- If he shoots at C, his expected payoff is:
-
- 0.3*valA(B,A) + 0.7*valA(B,C,A) = 0.7*valA(B,C,A)
-
- And if he deliberately misses, his expected payoff is:
-
- valA(B,C,A)
-
- Since he tries to maximise his payoff, we can immediately eliminate shooting
- at C as a strategy - it is strictly dominated by shooting at B. So A's
- expected payoff is:
-
- valA(A,B,C) = MAX(valA(B,C,A), 9/130 + 0.7*valA(B,C,A))
-
- A similar argument shows that C's expected payoffs in the (C,A,B) position are:
-
- For shooting at A: 0.5*valC(A,B,C)
- For shooting at B: 35/130 + 0.5*valC(A,B,C)
- For missing: valC(A,B,C)
-
- So C either shoots at B or deliberately misses, and:
-
- valC(C,A,B) = MAX(valC(A,B,C), 35/130 + 0.5*valC(A,B,C))
-
- Each player can obtain a positive expected payoff by shooting at one of the
- other players, and it is known that an infinite sequence of misses will
- result in a zero payoff for all players. So it is known that some player's
- strategy must involve shooting at another player rather than deliberately
- missing.
-
- Now look at this from the point of view of player B. He knows that *if* it
- is sub-optimal for him to shoot at another player, then it is optimal for at
- least one of the other players to shoot. He also knows that if the other
- players choose to shoot, they will shoot *at him*. If he deliberately
- misses, therefore, the best that he can hope for is that they miss him and
- he is presented with the same situation again. This is clearly less good for
- him than getting his shot in first. So in position (B,C,A), he must shoot at
- another player rather than deliberately miss.
-
- B's expected payoffs are:
-
- For shooting at A: valB(C,B) = 5/10
- For shooting at C: valB(A,B) = 7/10
-
- So in position (B,C,A), B shoots at C for an expected payoff of 7/10. This
- gives us:
-
- valA(B,C,A) = 3/10 valB(B,C,A) = 7/10 valC(B,C,A) = 0
-
- So valA(A,B,C) = MAX(3/10, 9/130 + 21/100) = 3/10, and A's best strategy in
- position (A,B,C) is to deliberately miss, giving us:
-
- valA(A,B,C) = 3/10 valB(A,B,C) = 7/10 valC(A,B,C) = 0
-
- And finally, valC(C,A,B) = MAX(0, 35/130 + 0) = 7/26, and C's best strategy
- in position (C,A,B) is to shoot at B. Working the other two values out from
- the same strategies, via valX(C,A,B) = 0.5*valX(A,C) + 0.5*valX(A,B,C),
- gives us:
-
- valA(C,A,B) = 99/260 valB(C,A,B) = 91/260 valC(C,A,B) = 7/26
-
- I suspect that, with this payoff function, all positions with 3 players can
- be resolved. For each player, we can establish that if their correct
- strategy is to fire at another player, then it is to fire at whichever of
- the other players is more dangerous. The most dangerous of the three players
- then finds that he has nothing to lose by firing at the second most
- dangerous.
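-
- The arithmetic above can be reproduced with a few lines of exact
- calculation; a Python sketch (the duel formula is just Mosteller's
- geometric series summed in closed form):
-
- from fractions import Fraction as F
-
- acc = {'A': F(3, 10), 'B': F(1), 'C': F(1, 2)}
-
- def duel(x, y):
-     # P(x wins a two-man duel, firing first): p = a + (1-a)(1-b)p,
-     # hence p = a / (a + b - a*b).
-     a, b = acc[x], acc[y]
-     return a / (a + b - a * b)
-
- print(duel('A', 'C'), 1 - duel('C', 'A'))  # valA(A,C)=6/13, valA(C,A)=3/13
-
- # (A,B,C): A misses deliberately, B shoots C, A then duels B firing first.
- valA_ABC = duel('A', 'B')
- print(valA_ABC, 1 - valA_ABC)              # 3/10 and 7/10
-
- # (C,A,B): C shoots at B; a hit leads to the (A,C) duel, a miss to (A,B,C).
- print(acc['C'] * duel('A', 'C') + (1 - acc['C']) * valA_ABC,   # 99/260
-       (1 - acc['C']) * (1 - valA_ABC),                         # 91/260
-       acc['C'] * (1 - duel('A', 'C')))                         # 7/26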
-
- Questions:
-
- (a) In the general case, what are the optimal strategies for the other two
- players, possibly as functions of the hit probabilities and the cyclic
- order of the three players?
-
- (b) What happens in the 4 or more player case?
-
- -- David Seal <dseal@armltd.co.uk>
-
- ==> english/acronym.p <==
- What acronyms have become common words?
-
- ==> english/acronym.s <==
- The following is the list of acronyms which have become common nouns.
- An acronym is "a word formed from the initial letter or letters of each
- of the successive parts or major parts of a compound term" (Webster's Ninth).
- A common noun will occur uncapitalized in Webster's Ninth.
-
- Entries in the following table include the year in which they first
- entered the language (according to the Ninth), and the Merriam-Webster
- dictionary that first contains them. The following symbols are used:
-
- NI1 New International (1909)
- NI1+ New Words section of the New International (1931)
- NI2 New International Second Edition (1934)
- NI2+ Addendum section of the Second (1959, same as 1954)
- NI3 Third New International (1961)
- 9C Ninth New Collegiate (1983)
- 12W 12,000 Words (separately published addendum to the Third, 1986)
-
- asdic Anti-Submarine Detection Investigation Committee (1940, NI2+)
- dew Distant Early Warning (1953, 9C)
- dopa DihydrOxyPhenylAlanine (1917, NI3)
- fido Freaks + Irregulars + Defects + Oddities (1966, 9C)
- jato Jet-Assisted TakeOff (1947, NI2+)
- laser Light Amplification by Stimulated Emission of Radiation (1957, NI3)
- lidar LIght Detection And Ranging (1963, 9C)
- maser Microwave Amplification by Stimulated Emission of Radiation (1955, NI3)
- nitinol NIckel + TItanium + Naval Ordnance Laboratory (1968, 9C)
- rad Radiation Absorbed Dose (1918, NI3)
- radar RAdio Detection And Ranging (ca. 1941, NI2+)
- rem Roentgen Equivalent Man (1947, NI3)
- rep Roentgen Equivalent Physical (1947, NI3)
- scuba Self-Contained Underwater Breathing Apparatus (1952, NI3)
- snafu Situation Normal -- All Fucked (Fouled) Up (ca. 1940, NI2+)
- sofar SOund Fixing And Ranging (1946, NI2+)
- sonar SOund NAvigation Ranging (1945, NI2+)
- tepa Tri-Ethylene Phosphor-Amide (1953, 9C)
- zip Zone Improvement Plan (1963, 9C)
-